
Fair Enough? A map of the current limitations of the requirements to have "fair" algorithms

Castelnovo, Alessandro, Inverardi, Nicole, Nanino, Gabriele, Penco, Ilaria Giuseppina, Regoli, Daniele

arXiv.org Artificial Intelligence

In recent years, the rise in the usage and efficiency of Artificial Intelligence and, more generally, of Automated Decision-Making systems has brought with it an increasing and welcome awareness of the risks associated with such systems. One such risk is that of perpetuating or even amplifying bias and unjust disparities present in the data from which many of these systems learn to adjust and optimise their decisions. This awareness has, on the one hand, encouraged several scientific communities to come up with more and more appropriate ways and methods to assess, quantify, and possibly mitigate such biases and disparities. On the other hand, it has prompted more and more layers of society, including policy makers, to call for "fair" algorithms. We believe that while a lot of excellent and multidisciplinary research is currently being conducted, what is still fundamentally missing is the awareness that having "fair" algorithms is per se a nearly meaningless requirement, one that needs to be complemented with many additional societal choices to become actionable. Namely, there is a hiatus between what society is demanding from Automated Decision-Making systems and what this demand actually means in real-world scenarios. In this work, we outline the key features of such a hiatus and pinpoint a list of fundamental ambiguities and attention points that we as a society must address in order to give concrete meaning to the increasing demand for fairness in Automated Decision-Making systems.
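The ambiguity the abstract points to can be made concrete: even choosing which fairness metric to "quantify" is itself a societal decision, because common criteria can disagree on the same classifier. A minimal sketch (toy data; group names, labels, and predictions are all illustrative, and only two of many possible criteria are shown):

```python
# Two common fairness criteria applied to the same binary classifier output.
# All data below is made up for illustration.

def demographic_parity_diff(preds, groups):
    """Difference in positive-prediction rates between groups "A" and "B"."""
    rate = lambda g: sum(p for p, grp in zip(preds, groups) if grp == g) / groups.count(g)
    return rate("A") - rate("B")

def equal_opportunity_diff(preds, labels, groups):
    """Difference in true-positive rates (recall) between groups "A" and "B"."""
    def tpr(g):
        pos = [p for p, y, grp in zip(preds, labels, groups) if grp == g and y == 1]
        return sum(pos) / len(pos)
    return tpr("A") - tpr("B")

groups = ["A"] * 4 + ["B"] * 4
labels = [1, 1, 0, 0, 1, 0, 0, 0]   # ground-truth outcomes
preds  = [1, 1, 0, 0, 1, 0, 0, 0]   # classifier decisions

dp = demographic_parity_diff(preds, groups)          # 0.50 - 0.25 = 0.25
eo = equal_opportunity_diff(preds, labels, groups)   # 1.0 - 1.0 = 0.0
```

On this data the classifier perfectly satisfies equal opportunity yet violates demographic parity, so whether it counts as "fair" depends entirely on which criterion society selects.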


It's not big data that discriminates – it's the people that use it

#artificialintelligence

Data can't be racist or sexist, but the way it is used can help reinforce discrimination. The internet means more data is collected about us than ever before, and it is used to make automatic decisions that can hugely affect our lives, from our credit scores to our employment opportunities. If that data reflects unfair social biases against sensitive attributes, such as our race or gender, the conclusions drawn from that data might also be based on those biases. But this era of "big data" doesn't need to entrench inequality in this way. If we build smarter algorithms to analyse our information and ensure we're aware of how discrimination and injustice may be at work, we can actually use big data to counter our human prejudices. This kind of problem can arise when computer models are used to make predictions in areas such as insurance, financial loans and policing.
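The mechanism described above, in which biased historical data drives biased decisions even when no sensitive attribute is used, can be sketched in a few lines. This is a toy illustration of why "fairness through unawareness" fails when a correlated proxy (here a made-up postcode) remains in the data; every record and name is invented:

```python
# Illustrative sketch: a group-blind rule learned from biased history
# still reproduces group disparities via a correlated proxy (postcode).
# All data and names below are made up.

# Historical loan decisions, biased against group "B", whose members
# mostly live in postcode "Z2".
applicants = [
    {"group": "A", "postcode": "Z1", "approved": 1},
    {"group": "A", "postcode": "Z1", "approved": 1},
    {"group": "A", "postcode": "Z2", "approved": 1},
    {"group": "B", "postcode": "Z2", "approved": 0},
    {"group": "B", "postcode": "Z2", "approved": 0},
    {"group": "B", "postcode": "Z1", "approved": 1},
]

def majority_outcome(postcode):
    """A "group-blind" model: approve the majority historical outcome
    for the applicant's postcode (the group attribute is never read)."""
    outcomes = [a["approved"] for a in applicants if a["postcode"] == postcode]
    return int(sum(outcomes) * 2 > len(outcomes))

def approval_rate(group):
    decisions = [majority_outcome(a["postcode"]) for a in applicants if a["group"] == group]
    return sum(decisions) / len(decisions)

rate_a = approval_rate("A")  # 2/3: postcode Z1 is approved, Z2 is not
rate_b = approval_rate("B")  # 1/3: the historical disparity persists
```

Even though the rule never looks at group membership, it approves group "A" twice as often as group "B", because the postcode encodes the bias baked into the training data.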